A Model of Recurrent Neural Networks that Learn State-Transitions of Finite State Transducers

Author

  • Itsuki NODA
Abstract

A model, called the 'SGH' model, and its learning method are proposed. While simple recurrent networks (SRNs) and finite state transducers (FSTs) have similar structures, their learning methods are quite different, so SRNs cannot acquire suitable state-transitions through conventional learning. The proposed model and method construct an SRN that has suitable state-transitions for a given task. To derive the model and the method, a procedure that constructs an FST from input-output examples is composed using the state-minimization technique. This procedure consists of three steps: the 'keeping input history' step, the 'grouping states' step, and the 'constructing state-transitions' step. Each step is then reconstructed as the learning of a neural network, and finally the three networks are combined into the 'SGH' model. Experiments show that the 'SGH' model can learn suitable state-transitions for given tasks, and that it increases the ability of SRNs to process temporal sequences with LDDs.
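
The three-step construction named above lends itself to a short illustration. The Python below is only a minimal sketch under assumed conventions: the build_fst helper, the tuple-based example format, and the behaviour-signature grouping (a crude stand-in for proper state minimization) are all invented here and are not the paper's algorithm or code.

```python
# A minimal sketch, assuming a toy data format, of the three steps named in the
# abstract: (1) keep input histories, (2) group histories into states,
# (3) construct state-transitions.  Not the paper's algorithm or code.

from collections import defaultdict


def build_fst(examples):
    """examples: list of (inputs, outputs) pairs; both are equal-length sequences."""
    # Step 1: 'keeping input history' -- record the output symbol emitted
    # after every observed input prefix (history).
    history_output = {}
    for inputs, outputs in examples:
        for t in range(len(inputs)):
            history_output[tuple(inputs[:t + 1])] = outputs[t]

    # Step 2: 'grouping states' -- merge histories whose observed future
    # behaviour is identical (a crude stand-in for state minimization).
    def future(history):
        return frozenset(
            (h[len(history):], o)
            for h, o in history_output.items()
            if len(h) > len(history) and h[:len(history)] == history
        )

    groups = defaultdict(list)
    for history in history_output:
        groups[future(history)].append(history)
    state_of = {h: i for i, hs in enumerate(groups.values()) for h in hs}

    # Step 3: 'constructing state-transitions' -- link the state of each
    # history to the state of that history extended by one input symbol.
    transitions = {}
    for history, output in history_output.items():
        src = state_of.get(history[:-1], "start")   # empty history = start state
        transitions[(src, history[-1])] = (state_of[history], output)
    return transitions


# Toy example (values made up for illustration): output 1 exactly when the
# current input symbol repeats the previous one.
print(build_fst([((0, 0, 1, 1), (0, 1, 0, 1))]))
```

With more examples, histories that behave identically collapse into shared states, which is what the 'grouping states' step is meant to capture; in the paper each of the three steps is then recast as the training of a neural network, and the three networks are combined into the 'SGH' model.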

Similar articles

Recurrent Neural Networks Learn Deterministic Representations of Fuzzy Finite-State Automata

The paradigm of deterministic finite-state automata (DFAs) and their corresponding regular languages has been shown to be very useful for addressing fundamental issues in recurrent neural networks. The issues that have been addressed include knowledge representation, extraction, and refinement, as well as the development of advanced learning algorithms. Recurrent neural networks are also very promising t...

Evolutionary Training of Hybrid Systems of Recurrent Neural Networks and Hidden Markov Models

We present a hybrid architecture of recurrent neural networks (RNNs) inspired by hidden Markov models (HMMs). We train the hybrid architecture using genetic algorithms to learn and represent dynamical systems. The hybrid architecture is trained on a set of deterministic finite-state automata strings, and we observe its generalization performance when presented with a new se...

Learning a class of large finite state machines with a recurrent neural network

One of the issues in any learning model is how it scales with problem size. The problem of learning finite state machines (FSMs) from examples with recurrent neural networks has been extensively explored. However, these results are somewhat disappointing in the sense that the machines that can be learned are too small to be competitive with existing grammatical inference algorithms. We show t...

Machine Translation using Neural Networks and Finite-State Models*

Both Neural Networks and Finite-State Models have recently proved to be encouraging approaches to Example-Based Machine Translation. This paper compares the translation performances achieved with the two techniques as well as the corresponding resources required. To this end, both Elman Simple Recurrent Nets and Subsequential Transducers were trained to tackle a simple pseudo-natural machine tr...

Modelling of Deterministic, Fuzzy and Probabilistic Dynamical Systems

Recurrent neural networks and hidden Markov models have been popular tools for sequence recognition problems such as automatic speech recognition. This work investigates the combination of recurrent neural networks and hidden Markov models into a hybrid architecture. This combination is feasible due to the similarity of the architectural dynamics of the two systems. Initial experiments we...


Journal:

Volume:   Issue:

Pages:  -

Publication date: 1994